
    Which Surrogate Works for Empirical Performance Modelling? A Case Study with Differential Evolution

    It is not uncommon for meta-heuristic algorithms to contain intrinsic parameters whose optimal configuration is crucial for achieving peak performance. However, evaluating the effectiveness of a configuration is expensive, as it involves many costly runs of the target algorithm. Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters. Such surrogates constitute an important building block for understanding algorithm performance, algorithm portfolio construction/selection, and automatic algorithm configuration. In principle, many off-the-shelf machine learning techniques can be used to build surrogates. In this paper, we take differential evolution (DE) as the baseline algorithm for a proof-of-concept study. Regression models are trained to model DE's empirical performance given a parameter configuration. In particular, we evaluate and compare four popular regression algorithms in terms of both how well they predict the empirical performance of a particular parameter configuration and how well they approximate the parameter-versus-performance landscape.
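
    As a rough, hedged illustration of the surrogate-modelling idea (not the paper's actual experimental setup, whose four regressors and benchmark suite are unspecified here), the sketch below samples DE parameter configurations, measures DE's empirical performance with SciPy's implementation, and fits a random-forest surrogate:

        # Minimal sketch, assuming SciPy's DE and a random-forest regressor stand in
        # for the paper's (unspecified) target-algorithm settings and surrogate models.
        import numpy as np
        from scipy.optimize import differential_evolution, rosen
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        bounds = [(-5, 5)] * 10                      # 10-D Rosenbrock as the target problem

        def run_de(mutation, recombination, popsize):
            """One costly evaluation of a DE parameter configuration."""
            result = differential_evolution(
                rosen, bounds, mutation=mutation, recombination=recombination,
                popsize=int(popsize), maxiter=30, tol=0, seed=1, polish=False)
            return result.fun                        # empirical performance: best fitness found

        # Sample configurations (F, CR, population-size multiplier) and measure performance.
        configs = np.column_stack([
            rng.uniform(0.1, 1.9, 40),               # mutation factor F
            rng.uniform(0.0, 1.0, 40),               # crossover rate CR
            rng.integers(5, 40, 40),                 # population-size multiplier
        ])
        perf = np.array([run_de(*c) for c in configs])

        # Fit the cheap surrogate and query it at an unseen configuration.
        surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(configs, perf)
        print(surrogate.predict([[0.8, 0.9, 15]]))   # cheap prediction of DE performance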

    Evolutionary Multiobjective Optimization Driven by Generative Adversarial Networks (GANs)

    Recently, a growing number of works have proposed driving evolutionary algorithms with machine learning models. The performance of such model-based evolutionary algorithms usually depends heavily on the training quality of the adopted models. Since model training typically requires a certain amount of data (i.e., the candidate solutions generated by the algorithm), performance deteriorates rapidly as the problem scale increases, owing to the curse of dimensionality. To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into real and fake samples to train the GANs; the offspring solutions are then sampled from the trained GANs. Thanks to the powerful generative ability of GANs, the proposed algorithm is capable of generating promising offspring solutions in high-dimensional decision spaces with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables. Experimental results on these test problems demonstrate the effectiveness of the proposed algorithm.
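
    The following toy sketch (an assumption-laden stand-in, not the paper's method: it ranks parents with a single sphere objective instead of the paper's multi-objective classification, and uses arbitrary small PyTorch networks) illustrates the per-generation loop of labelling parents as real/fake, training a GAN on them, and sampling offspring from the generator:

        # Hedged sketch of one generation of GAN-driven offspring generation.
        import torch
        import torch.nn as nn

        dim, n_pop, latent = 30, 100, 8
        G = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim), nn.Sigmoid())
        D = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCELoss()

        parents = torch.rand(n_pop, dim)                    # current population in [0, 1]^dim
        fitness = parents.pow(2).sum(dim=1)                 # toy single objective (sphere)
        order = fitness.argsort()
        real = parents[order[: n_pop // 2]]                 # "real": better half of the parents
        fake = parents[order[n_pop // 2:]]                  # "fake": worse half

        for _ in range(200):                                # adversarial training loop
            opt_d.zero_grad()
            d_loss = (bce(D(real), torch.ones(len(real), 1))
                      + bce(D(fake), torch.zeros(len(fake), 1))
                      + bce(D(G(torch.randn(n_pop, latent)).detach()), torch.zeros(n_pop, 1)))
            d_loss.backward(); opt_d.step()

            opt_g.zero_grad()
            g_loss = bce(D(G(torch.randn(n_pop, latent))), torch.ones(n_pop, 1))
            g_loss.backward(); opt_g.step()

        offspring = G(torch.randn(n_pop, latent)).detach()  # sample candidate offspring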

    Evolutionary methods for modelling and control of linear and nonlinear systems

    The aim of this work is to explore the potential and enhance the capability of evolutionary computation for the development of novel and advanced methodologies for engineering system modelling and controller design automation. The key to these modelling and design problems is optimisation. Conventional calculus-based methods currently adopted in engineering optimisation are in essence local search techniques, which require derivative information and lack robustness in solving practical engineering problems. One objective of this research is thus to develop an effective and reliable evolutionary algorithm for engineering applications. For this, a hybrid evolutionary algorithm is developed, which combines the global search power of a "generational" EA with the interactive local fine-tuning of Boltzmann learning. It overcomes the weak local exploration and chromosome stagnation usually encountered in pure EAs. A novel one-integer-one-parameter coding scheme is also developed to significantly reduce the quantisation error, chromosome length and processing overhead time. An "Elitist Direct Inheritance" technique is developed to work with Boltzmann learning, reducing the control parameters and convergence time of EAs. Parallelism of the hybrid EA is also realised in this thesis, with nearly linear pipelinability. Generic model reduction and linearisation techniques in the L2 and L∞ norms are developed based on the hybrid EA technique. They are applicable to both discrete- and continuous-time systems in both the time and the frequency domains. Unlike conventional model reduction methods, the EA-based techniques are capable of simultaneously recommending both an optimal order number and optimal parameters, using a control gene as a structural switch. This approach is extended to MIMO system linearisation from both a nonlinear model and I/O data of the plant. It also allows linearisation over an entire operating region with the linear approximate-model network technique studied in this thesis. To build an original model, evolutionary black-box and clear-box system identification techniques are developed based on the L2 norm. These techniques can identify both the system parameters and the transport delay in the same evolution process. These open-loop identification methods are further extended to closed-loop system identification. For robust control, evolutionary L∞ identification techniques are developed. Since most practical systems are nonlinear in nature, and it is difficult to model the dominant dynamics of such a system while retaining neglected dynamics for accuracy, evolutionary grey-box modelling techniques are proposed. These techniques can utilise a global clear-box structure dominated by physical laws, with local black-boxes serving as coefficient models of the clear-box to capture unmeasurable nonlinearities. This unveils a new way of modelling engineering systems. Even with an accurately identified model, controller design problems still need to be overcome. Difficulties of design by conventional analytical and numerical means are discussed, and a design automation technique is then developed. This is again enabled by the hybrid evolutionary algorithm of this thesis. More importantly, this technique enables the unification of linear control system designs in both the time and the frequency domains subject to performance satisfaction. It is also extended to control along a trajectory of operating points for nonlinear systems.
In addition, a multi-objective evolutionary algorithm is developed to make the design more transparent and visible. To take a step towards autonomy in building control systems, a technique for direct design from plant step-response data is developed, which bypasses the system identification phase. These computer-automated intelligent design methodologies are expected to offer improved productivity and control-system quality.
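
    As a hedged sketch of the hybrid idea only, the toy loop below combines a generational EA (global search) with a Boltzmann-style, Metropolis-accepted local refinement of the elite; the thesis' actual coding scheme, Boltzmann learning rule and "Elitist Direct Inheritance" mechanism are not detailed in the abstract:

        # Minimal sketch: generational EA + Boltzmann-style local refinement of the elite.
        import numpy as np

        rng = np.random.default_rng(0)
        dim, pop_size, generations = 8, 40, 100

        def sphere(x):
            return np.sum(x ** 2, axis=-1)                         # toy objective to minimise

        pop = rng.uniform(-5, 5, (pop_size, dim))
        for gen in range(generations):
            fit = sphere(pop)

            # Global step: tournament selection, arithmetic crossover, Gaussian mutation.
            idx_a = rng.integers(0, pop_size, pop_size)
            idx_b = rng.integers(0, pop_size, pop_size)
            parents = np.where((fit[idx_a] < fit[idx_b])[:, None], pop[idx_a], pop[idx_b])
            mates = parents[rng.permutation(pop_size)]
            alpha = rng.random((pop_size, 1))
            children = alpha * parents + (1 - alpha) * mates       # arithmetic crossover
            children += rng.normal(0, 0.1, children.shape)         # Gaussian mutation

            # Local step: Boltzmann-style (Metropolis-accepted) refinement of the best.
            best = pop[np.argmin(fit)].copy()
            T = 1.0 * (0.95 ** gen)                                # cooling schedule
            for _ in range(20):
                cand = best + rng.normal(0, 0.05, dim)
                delta = sphere(cand) - sphere(best)
                if delta < 0 or rng.random() < np.exp(-delta / T): # Metropolis acceptance
                    best = cand

            children[0] = best                                     # elite passed on directly
            pop = children

        print("best fitness:", sphere(pop).min())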

    Unleashing the Potential of Spiking Neural Networks for Sequential Modeling with Contextual Embedding

    The human brain exhibits remarkable abilities in integrating temporally distant sensory inputs for decision-making. However, existing brain-inspired spiking neural networks (SNNs) have struggled to match their biological counterparts in modeling long-term temporal relationships. To address this problem, this paper presents a novel Contextual Embedding Leaky Integrate-and-Fire (CE-LIF) spiking neuron model. Specifically, the CE-LIF model incorporates a meticulously designed contextual embedding component into the adaptive neuronal firing threshold, thereby enhancing the memory storage of spiking neurons and facilitating effective sequential modeling. Additionally, theoretical analysis is provided to elucidate how the CE-LIF model enables long-term temporal credit assignment. Remarkably, when compared to state-of-the-art recurrent SNNs, feedforward SNNs comprising the proposed CE-LIF neurons demonstrate superior performance across extensive sequential modeling tasks in terms of classification accuracy, network convergence speed, and memory capacity.
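
    Below is a hedged, illustrative sketch of a LIF neuron whose adaptive firing threshold is offset by a learnable contextual-embedding term; the paper's actual CE-LIF formulation, embedding design, and training scheme (e.g., surrogate gradients for the spike nonlinearity) are not given in the abstract, so the per-timestep embedding here is a hypothetical placeholder:

        # Minimal sketch of a context-modulated adaptive-threshold LIF neuron (inference only).
        import torch
        import torch.nn as nn

        class ContextualLIF(nn.Module):
            def __init__(self, num_neurons, num_steps, tau=2.0, base_threshold=1.0):
                super().__init__()
                self.decay = 1.0 - 1.0 / tau
                self.base_threshold = base_threshold
                # Hypothetical contextual embedding: a learnable offset per timestep and neuron.
                self.context = nn.Parameter(torch.zeros(num_steps, num_neurons))

            def forward(self, currents):                 # currents: (time, batch, neurons)
                mem = torch.zeros_like(currents[0])
                spikes = []
                for t, i_t in enumerate(currents):
                    mem = self.decay * mem + i_t         # leaky integration of input current
                    threshold = self.base_threshold + self.context[t]  # context-modulated threshold
                    s = (mem >= threshold).float()       # fire when membrane crosses threshold
                    mem = mem - s * threshold            # soft reset after a spike
                    spikes.append(s)
                return torch.stack(spikes)

        out = ContextualLIF(num_neurons=4, num_steps=10)(torch.rand(10, 2, 4))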

    EvoX: A Distributed GPU-accelerated Library towards Scalable Evolutionary Computation

    During the past decades, evolutionary computation (EC) has demonstrated promising potential in solving various complex optimization problems of relatively small scale. Nowadays, however, ongoing developments in modern science and engineering are posing increasingly serious challenges to the conventional EC paradigm in terms of scalability. As problem scales increase, on the one hand, the encoding spaces (i.e., the dimensions of the decision vectors) are intrinsically larger; on the other hand, EC algorithms often require growing numbers of function evaluations (and probably larger population sizes as well) to work properly. Meeting such emerging challenges requires not only delicate algorithm designs but, more importantly, a high-performance computing framework. Hence, we develop a distributed GPU-accelerated algorithm library -- EvoX. First, we propose a generalized workflow for implementing general EC algorithms. Second, we design a scalable computing framework for running EC algorithms on distributed GPU devices. Third, we provide user-friendly interfaces to both researchers and practitioners for benchmark studies as well as extended real-world applications. To comprehensively assess the performance of EvoX, we conduct a series of experiments, including: (i) a scalability test via numerical optimization benchmarks with problem dimensions/population sizes up to the millions; (ii) an acceleration test via a neuroevolution task with multiple GPU nodes; (iii) an extensibility demonstration via application to reinforcement learning tasks on the OpenAI Gym. The code of EvoX is available at https://github.com/EMI-Group/EvoX.
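
    The sketch below illustrates the kind of algorithm/problem/workflow decoupling such a library promotes; it is not EvoX's actual API, just a generic step-based EC workflow written with PyTorch tensors that could be vectorised on a GPU:

        # Hedged sketch of an algorithm/problem/workflow split (not EvoX's API).
        import torch

        class RandomSearch:                                  # stand-in "algorithm" module
            def __init__(self, dim, pop_size, lb=-5.0, ub=5.0):
                self.dim, self.pop_size, self.lb, self.ub = dim, pop_size, lb, ub
                self.best_x, self.best_f = None, float("inf")

            def ask(self):                                   # propose a population as one tensor
                return torch.empty(self.pop_size, self.dim).uniform_(self.lb, self.ub)

            def tell(self, xs, fs):                          # absorb fitness feedback
                i = torch.argmin(fs)
                if fs[i] < self.best_f:
                    self.best_f, self.best_x = fs[i].item(), xs[i]

        class Sphere:                                        # stand-in "problem" module
            def evaluate(self, xs):
                return (xs ** 2).sum(dim=1)

        class Workflow:                                      # stand-in "workflow" module
            def __init__(self, algorithm, problem):
                self.algorithm, self.problem = algorithm, problem

            def step(self):                                  # one ask-evaluate-tell iteration
                xs = self.algorithm.ask()
                self.algorithm.tell(xs, self.problem.evaluate(xs))

        wf = Workflow(RandomSearch(dim=1000, pop_size=1000), Sphere())
        for _ in range(100):
            wf.step()
        print("best fitness:", wf.algorithm.best_f)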